Statistical regularities compress numerical representations
Abstract
Numerical information can be perceived at multiple levels of abstraction (e.g., one bird, or a flock of birds). The unit of input for numerosity perception can therefore involve a discrete object, or a set of objects grouped by shared features (e.g., color). Here we examine how the mere co-occurrence of objects shapes numerosity perception. Across three between-subjects experiments, observers v...
Similar articles

Semantic Regularities in Document Representations
Recent work has shown that distributed word representations are good at capturing linguistic regularities in language. This allows vector-oriented reasoning based on simple linear algebra between words. Since many different methods have been proposed for learning document representations, it is natural to ask whether there is also linear structure in these learned representations to allow simil...
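As a hedged illustration of the vector-oriented reasoning this abstract refers to, the sketch below solves a word analogy by vector arithmetic plus a nearest-neighbor lookup under cosine similarity. The toy 3-D vectors and the `embeddings` dict are hypothetical stand-ins, not the paper's data or model.

```python
# Minimal sketch of analogy solving via linear algebra on word vectors.
# The toy 3-D embeddings below are hypothetical; real models learn
# hundreds of dimensions from large corpora.
import numpy as np

embeddings = {
    "king":  np.array([0.8, 0.3, 0.1]),
    "man":   np.array([0.6, 0.1, 0.2]),
    "woman": np.array([0.7, 0.2, 0.9]),
    "queen": np.array([0.9, 0.4, 0.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "man" is to "king" as "woman" is to ...?
target = embeddings["king"] - embeddings["man"] + embeddings["woman"]
best = max(
    (w for w in embeddings if w not in {"king", "man", "woman"}),
    key=lambda w: cosine(embeddings[w], target),
)
print(best)  # -> "queen" with these toy vectors
```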
Statistical regularities reduce perceived numerosity.
Numerical information can be perceived at multiple levels (e.g., one bird, or a flock of birds). The level of input has typically been defined by explicit grouping cues, such as contours or connecting lines. Here we examine how regularities of object co-occurrences shape numerosity perception in the absence of explicit grouping cues. Participants estimated the number of colored circles in an ar...
Exploiting Morphological Regularities in Distributional Word Representations
We present an unsupervised, language-agnostic approach for exploiting morphological regularities present in high-dimensional vector spaces. We propose a novel method for generating embeddings of words from their morphological variants using morphological transformation operators. We evaluate this approach on the MSR word analogy test set (Mikolov et al., 2013d) with an accuracy of 85%, which is 12% ...
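The abstract does not say how the morphological transformation operators are constructed, so the following is only one plausible reading, offered as an assumption: an operator for a morphological relation modeled as the average embedding offset over known word pairs, then applied to generate an embedding for an unseen variant. All vectors and word pairs here are toy values.

```python
# Hedged sketch of one possible "morphological transformation operator":
# the average embedding offset over known (singular, plural) pairs.
# The paper's actual operators may differ; embeddings here are toy values.
import numpy as np

emb = {  # hypothetical toy embeddings
    "car":  np.array([0.2, 0.5]),
    "cars": np.array([0.3, 0.9]),
    "dog":  np.array([0.6, 0.1]),
    "dogs": np.array([0.7, 0.5]),
    "cat":  np.array([0.4, 0.3]),
}

# Estimate a plural operator from known (singular, plural) pairs.
pairs = [("car", "cars"), ("dog", "dogs")]
plural_offset = np.mean([emb[p] - emb[s] for s, p in pairs], axis=0)

# Generate an embedding for the unseen variant "cats".
emb_cats = emb["cat"] + plural_offset
print(emb_cats)  # predicted embedding for the plural form
```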
Linguistic Regularities in Continuous Space Word Representations
Continuous space language models have recently demonstrated outstanding results across a variety of tasks. In this paper, we examine the vector-space word representations that are implicitly learned by the input-layer weights. We find that these representations are surprisingly good at capturing syntactic and semantic regularities in language, and that each relationship is characterized by a re...
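The snippet above is cut off, but this line of work is associated with the finding that a relationship between words corresponds to a roughly constant vector offset. Assuming that reading, a quick sanity check is to compare the offsets computed from two analogous word pairs; the toy vectors below are hypothetical.

```python
# Sketch of checking whether a relationship ("male -> female") corresponds
# to a roughly constant vector offset: the offset from one word pair should
# align with the offset from an analogous pair. Toy embeddings only.
import numpy as np

emb = {  # hypothetical toy embeddings
    "king":  np.array([0.8, 0.3, 0.1]),
    "queen": np.array([0.9, 0.4, 0.8]),
    "man":   np.array([0.6, 0.1, 0.2]),
    "woman": np.array([0.7, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

offset_royal = emb["queen"] - emb["king"]
offset_plain = emb["woman"] - emb["man"]
print(cosine(offset_royal, offset_plain))  # near 1.0 if the offset is constant
```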
Journal

Journal title: Journal of Vision
Year: 2015
ISSN: 1534-7362
DOI: 10.1167/15.12.390